7 research outputs found

    Real-time GPU-based software beamformer designed for advanced imaging methods research

    High computational demand is known to be a technical hurdle for real-time implementation of advanced methods like synthetic aperture imaging (SAI) and plane wave imaging (PWI) that work with the pre-beamform data of each array element. In this paper, we present the development of a software beamformer for SAI and PWI with real-time parallel processing capacity. Our beamformer design comprises a pipelined group of graphics processing units (GPUs) that are hosted within the same computer workstation. During operation, each available GPU is assigned to perform demodulation and beamforming for one frame of pre-beamform data acquired from one transmit firing (e.g. point firing for SAI). To facilitate parallel computation, the GPUs have been programmed to treat the calculation of depth pixels from the same image scanline as a block of processing threads that can be executed concurrently; this process is repeated for all scanlines to obtain the entire frame of image data, i.e. a low-resolution image (LRI). To reduce processing latency due to repeated access of each GPU's global memory, we have made use of each thread block's fast shared memory (to store an entire line of pre-beamform data during demodulation), created texture memory pointers, and utilized global memory caches (to stream repeatedly used data samples during beamforming). Based on this beamformer architecture, a prototype platform has been implemented for SAI and PWI, and its LRI processing throughput has been measured for test datasets with a 40 MHz sampling rate, 32 receive channels, and imaging depths between 5 and 15 cm. When using two Fermi-class GPUs (GTX-470), our beamformer can compute LRIs of 512-by-255 pixels at over 3200 fps and 1300 fps for imaging depths of 5 cm and 15 cm, respectively. This processing throughput is roughly 3.2 times higher than that of a Tesla-class GPU (GTX-275). © 2010 IEEE. The 2010 IEEE International Ultrasonics Symposium, San Diego, CA, 11-14 October 2010.
In Proceedings of IEEE IUS, 2010, p. 1920-192
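The per-pixel delay-and-sum computation that the abstract maps onto GPU thread blocks can be sketched in NumPy. This is a minimal, unoptimized 2-D sketch under assumed geometry; the function name, the absence of apodization and interpolation, and the point-source transmit model are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def das_lri(channel_data, fs, c, src, elem_x, pixels_z, pixels_x):
    """Delay-and-sum beamform one low-resolution image (LRI) from a
    single point-source firing (hypothetical 2-D geometry, no apodization).

    channel_data : (n_samples, n_elems) RF data from one transmit firing
    src          : (z, x) position of the virtual point source
    elem_x       : lateral positions of the receive elements (z = 0 assumed)
    """
    n_samples, n_elems = channel_data.shape
    lri = np.zeros((len(pixels_z), len(pixels_x)))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # transmit path: source -> pixel; receive path: pixel -> element
            t_tx = np.hypot(z - src[0], x - src[1]) / c
            t_rx = np.hypot(z, x - elem_x) / c          # one delay per element
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            valid = idx < n_samples
            lri[iz, ix] = channel_data[idx[valid], np.nonzero(valid)[0]].sum()
    return lri
```

On the GPU, the inner loop over depth pixels of one scanline is what the paper assigns to a block of concurrent threads.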

    GPU-based beamformer: Fast realization of plane wave compounding and synthetic aperture imaging

    Although they show potential to improve ultrasound image quality, plane wave (PW) compounding and synthetic aperture (SA) imaging are computationally demanding and are known to be challenging to implement in real-time. In this work, we have developed a novel beamformer architecture with the real-time parallel processing capacity needed to enable fast realization of PW compounding and SA imaging. The beamformer hardware comprises an array of graphics processing units (GPUs) that are hosted within the same computer workstation. Their parallel computational resources are controlled by a pixel-based software processor that includes the operations of analytic signal conversion, delay-and-sum beamforming, and recursive compounding as required to generate images from the channel-domain data samples acquired using PW compounding and SA imaging principles. When using two GTX-480 GPUs for beamforming and one GTX-470 GPU for recursive compounding, the beamformer can compute compounded 512 × 255 pixel PW and SA images at throughputs of over 4700 fps and 3000 fps, respectively, for imaging depths of 5 cm and 15 cm (32 receive channels, 40 MHz sampling rate). Its processing capacity can be further increased if additional GPUs or more advanced models of GPU are used. © 2011 IEEE.
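The recursive compounding step described above maintains a running coherent sum of the most recent LRIs, so each new firing yields an updated compounded image by adding the newest LRI and subtracting the oldest. A minimal moving-sum sketch (function name, generator interface, and fixed window are assumptions, not the authors' design):

```python
import numpy as np

def recursive_compound(lri_stream, n):
    """Recursively compound the n most recent LRIs (real or complex arrays).

    Yields one compounded image per incoming LRI once n frames are buffered;
    the update cost per firing is O(1) image additions regardless of n.
    """
    acc = None
    buf = []
    for lri in lri_stream:
        buf.append(lri)
        acc = lri if acc is None else acc + lri
        if len(buf) > n:
            acc = acc - buf.pop(0)   # drop the oldest contribution
        if len(buf) == n:
            yield acc
```

This is why a single GPU can keep up with compounding while two others beamform: each compounded frame costs only one add and one subtract per pixel.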

    Multi-channel pre-beamformed data acquisition system for research on advanced ultrasound imaging methods

    The lack of open access to the pre-beamformed data of an ultrasound scanner has limited the research of novel imaging methods to a few privileged laboratories. To address this need, we have developed a pre-beamformed data acquisition (DAQ) system that can collect data over 128 array elements in parallel from the Ultrasonix series of research-purpose ultrasound scanners. Our DAQ system comprises three system-level blocks: 1) a connector board that interfaces with the array probe and the scanner through a probe connector port; 2) a main board that triggers DAQ and controls data transfer to a computer; and 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to the on-board memory. This system can acquire pre-beamformed data with 12-bit resolution when using a 40 MHz sampling rate. It houses a 16 GB RAM buffer that is sufficient to store 128 channels of pre-beamformed data for 8000 to 25 000 transmit firings, depending on imaging depth, corresponding to nearly a 2 s period in typical imaging setups. Following the acquisition, the data can be transferred through a USB 2.0 link to a computer for offline processing and analysis. To evaluate the feasibility of using the DAQ system for advanced imaging research, two proof-of-concept investigations have been conducted on beamforming and plane-wave B-flow imaging. Results show that adaptive beamforming algorithms such as the minimum variance approach can generate sharper images of a wire cross-section whose diameter is equal to the imaging wavelength (150 μm in our example). Also, plane-wave B-flow imaging can provide more consistent visualization of blood speckle movement given the higher temporal resolution of this imaging approach (2500 fps in our example). © 2012 IEEE.
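The minimum variance (Capon) beamforming mentioned above replaces fixed apodization with data-dependent weights computed per pixel from the delay-aligned channel samples. A minimal sketch, assuming a unit steering vector after delay alignment and diagonal loading for numerical stability (function name and loading factor are illustrative, not from the paper):

```python
import numpy as np

def mv_weights(snapshots, diag_load=1e-2):
    """Minimum-variance (Capon) apodization weights for one pixel.

    snapshots : (n_channels, n_snapshots) delay-aligned channel samples.
    Returns weights w = R^-1 a / (a^H R^-1 a) with a = ones (aligned data).
    """
    m = snapshots.shape[0]
    # sample spatial covariance with trace-scaled diagonal loading
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += diag_load * np.trace(R).real / m * np.eye(m)
    a = np.ones((m, 1))
    Ri_a = np.linalg.solve(R, a)
    return (Ri_a / (a.conj().T @ Ri_a)).ravel()
```

The distortionless constraint a^H w = 1 means the weights always sum to one while minimizing interference power, which is what sharpens the wire cross-section images.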

    A modified synthetic aperture imaging approach with axial motion compensation

    No full text
    IEEE Ultrasonics Symposium. Synthetic aperture (SA) imaging provides an alternative means of obtaining ultrasound images. However, since this approach is based on coherent summation of low-resolution images (LRIs) acquired from different point sources along an array, its image quality may be degraded if motion is present between firings. In this work, we report a modified SA imaging scheme that can compensate for the effects of axial motion on image quality. The scheme first acquires data using an interleaved firing sequence in which a center-point firing is carried out between each pair of point-source firings. It then estimates the mean axial shift between two LRIs by performing a cross-correlation analysis on the raw channel data of successive center-point firings. The LRI of each point-source firing is then axially counter-shifted by the estimated shift value to compensate for possible aberration during SA image formation. To test our proposed scheme, we conducted a tissue-cyst phantom imaging experiment in which raw channel data was acquired using our interleaved firing scheme for 97 virtual point sources laterally spaced 0.3 mm apart and axially located 10 mm behind the probe. This data acquisition procedure was repeated for a range of inter-firing probe displacements (15-90 μm) introduced via a motion stage. From the acquired data, motion-compensated SA images were formed using our modified image formation method, and their contrast was compared to that of images formed without motion compensation. Results show that our proposed scheme reduced the amount of blurring seen in SA images when uniform axial motion is present during data acquisition. Without motion compensation, the contrast level for the phantom cysts can drop by 15-20 dB (relative to the SA image taken without motion) for the range of inter-firing displacements examined. When our motion compensation strategy was applied, this contrast drop was less than 5 dB.
© 2008 IEEE.
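The core of the motion estimator above is a cross-correlation between the raw data of two successive center-point firings, whose peak lag gives the mean axial displacement. A minimal integer-lag sketch for a single RF trace (sub-sample interpolation and per-channel averaging, which a real implementation would need, are omitted; the function name is assumed):

```python
import numpy as np

def estimate_axial_shift(ref, cur, fs, c):
    """Estimate the mean axial shift between two center-point firings.

    ref, cur : 1-D RF traces from successive center-point firings
    Returns the depth shift in meters (positive = away from the probe),
    using the factor c/(2*fs) to convert a two-way delay to depth.
    """
    xc = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    lag = int(np.argmax(xc)) - (len(ref) - 1)   # delay in samples
    return lag * c / (2 * fs)
```

Each LRI is then counter-shifted by this estimate before the coherent SA summation.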

    A least-squares vector flow estimator for synthetic aperture imaging

    No full text
    This paper presents a least-squares (LS) vector flow estimator that is intended to calculate axial and lateral velocities using aperture-domain Doppler data from multiple transmit point sources and multiple receive apertures realized via the synthetic aperture (SA) technique. Our new estimator comprises two main stages. First, for each transmit point and receive aperture, we obtain aperture-domain Doppler ensembles and use them to compute a frequency estimate (via the lag-one autocorrelator) for every spatial point in the imaging view. Subsequently, by noting that each transmit point and receive aperture deviates in flow angle, we estimate flow vectors by creating a set of Doppler equations with two unknowns (i.e. axial and lateral velocities) from the frequency estimates and solving this equation set as an over-determined LS problem. To evaluate the performance of the new estimator, Field II simulations were performed for a scenario with a 5 mm diameter steady-flow tube (tube angle: 0°-90°, center velocity: 0-25 cm/s). For these simulations, a 5.5 MHz linear array with 128 elements was used; pre-beamform data was acquired for 97 virtual point sources (0.3 mm spacing, 32 elements), and 12 two-cycle pulses were fired through each point source (PRF: 5 kHz). Our LS estimator was then applied to a few different transmit-receive SA imaging configurations (e.g. 97 transmit points and 3 receive apertures). Results show that, if a multi-transmit configuration is used, the LS estimator is more capable of providing vector flow maps that roughly resemble the theoretical profile. ©2009 IEEE.
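The second stage described above stacks one Doppler equation per transmit-point/receive-aperture pair and solves for the two velocity components. A sketch under a standard narrowband vector-Doppler model, f_d = (f0/c)[(cos θtx + cos θrx)·vz + (sin θtx + sin θrx)·vx], where the angles are measured from the axial direction; the exact equation set used by the authors may differ, and all names here are assumptions:

```python
import numpy as np

def ls_vector_flow(f_d, angles_tx, angles_rx, f0, c):
    """Least-squares (vz, vx) estimate from multiple Doppler frequencies.

    f_d       : (K,) frequency estimates, one per tx-point/rx-aperture pair
    angles_tx : (K,) transmit beam angles from the axial direction (rad)
    angles_rx : (K,) receive beam angles from the axial direction (rad)
    """
    A = (f0 / c) * np.column_stack([
        np.cos(angles_tx) + np.cos(angles_rx),   # axial sensitivity
        np.sin(angles_tx) + np.sin(angles_rx),   # lateral sensitivity
    ])
    v, *_ = np.linalg.lstsq(A, f_d, rcond=None)  # over-determined LS solve
    return v                                     # [vz, vx]
```

With K > 2 equations the system is over-determined, which is why multi-transmit configurations give more robust vector flow maps.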

    Towards integrative learning in biomedical engineering: A project course on electrocardiogram monitor design

    No full text
    This paper presents the development of a guided project course that aims to help biomedical engineering students integrate technical concepts in electric circuits, biomedical instrumentation, and human physiology. Our course involves the design of an electrocardiogram monitor from scratch, and it differs from other lab-based courses that comprise a series of individual experiments. There are three project stages in this course: 1) breadboard prototype design, 2) device fabrication using printed circuit board techniques, and 3) pilot experimentation. Each stage has its own aligned set of objectives, learning activities, and assessment exercises for students to achieve or complete. Course surveys have been conducted at the end of the first two stages to solicit ongoing feedback from students. The survey results indicate that many students have high learning interest in our course, mainly because it is their first time building a medical device and using it to perform experiments. Many agreed that the course helped develop their critical thinking and problem-solving skills, although there are concerns over its heavy workload as compared to lecture-based courses. ©2009 IEEE.

    Design of a multi-channel pre-beamform data acquisition system for an ultrasound research scanner

    No full text
    Access to the pre-beamform data of each array channel on an ultrasound scanner is important to experimental investigations on advanced imaging research topics like adaptive beamforming and synthetic aperture imaging. Through such data access, we can obtain in-vitro or in-vivo insights on various imaging methods without resorting to hardware implementation. This paper reports the development of a pre-beamform data acquisition (DAQ) system that can collect data from 128 array elements in parallel. Our DAQ system is intended to interface with a Sonix-RP research scanner through a probe connector port. It comprises three major blocks: 1) a connector board that interfaces with the array probe and the scanner; 2) a main board that triggers data acquisition and controls data transfer to a computer; 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to the on-board DDR2 memory. The probe-connecting end of this system is interfaced with TX810 chips to facilitate switching between transmit and receive modes. When receiving data, the incoming analog signals are first passed into a low-noise amplifier. These signals are then sent into AD9272 chips on the four receiver boards to perform time gain compensation, anti-alias filtering, and 12-bit data sampling at an 80 MHz rate. Subsequently, the digitized signal samples are de-serialized using a Virtex-5 FPGA and are stored into DDR2 memory. To facilitate data retrieval, we implemented a Virtex-5 FPGA on the main board to initiate data transfer and, if preferred, to perform on-board beamforming (programmable by the user) prior to sending data to a computer through a USB 2.0 link. For our prototype, we used 16 GB of DDR2 memory for data storage. With a frame rate of 35 Hz, a 10 cm depth of view, and 128 transmit firings per frame, this DAQ system is capable of collecting pre-beamform data from all array channels for 2.5 seconds.
We are currently completing the prototype development in collaboration with Ultrasonix. ©2009 IEEE.
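As a rough sanity check on the stated buffer capacity, the figures above (16 GB of memory, 128 channels, 40 MHz sampling, 35 Hz frame rate, 128 firings per frame, 10 cm depth) can be plugged in directly. This is illustrative only: it assumes each 12-bit sample is stored in 2 bytes and ignores packing and protocol overhead, which likely accounts for the small gap from the quoted 2.5 s:

```python
# Back-of-envelope capacity estimate (assumed 2 bytes per 12-bit sample).
GB = 2**30
fs = 40e6            # sampling rate, Hz
depth = 0.10         # imaging depth, m
c = 1540.0           # speed of sound, m/s
channels = 128

samples_per_firing = int(2 * depth / c * fs)        # two-way travel time
bytes_per_firing = samples_per_firing * channels * 2
firings = 16 * GB // bytes_per_firing
seconds = firings / (35 * 128)                      # 35 fps, 128 firings/frame
print(firings, round(seconds, 1))
```

The estimate lands in the same ballpark as the paper's 2.5 s figure, consistent with the 8000-25 000 firing range quoted for the companion 2012 system.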